How California's New AI Law Protects Whistleblowers

TIME - Tech

Booth is a reporter at TIME. Governor Gavin Newsom speaks at Google about preparing students and workers for the next generation of technology, in San Francisco, California, on August 7, 2025. CEOs of the companies racing to build smarter AI--Google DeepMind, OpenAI, xAI, and Anthropic--have been clear about the stakes.


Trump praised by faith leaders for AI leadership as they warn of technology's 'potential peril'

FOX News

Sen. Ted Cruz, R-Texas, joins 'America's Newsroom' along with 15-year-old AI deepfake victim Elliston Berry to discuss the importance of the 'Take It Down' bill, warning the issue is 'rising every day.' Evangelical leaders praised President Donald Trump for his leadership on artificial intelligence ("AI") in an open letter published last week, while cautioning him to ensure the technology is developed responsibly. Dubbing Trump the "AI President," the religious leaders wrote that they believe Trump is there by "Divine Providence" to guide the world on the future of AI. The signatories said they are "pro-science" and fully support the advancement of technology which benefits their own ministries around the world. "We are also pro-economic prosperity and economic leadership for America and our friends. We do not want to see the AI revolution slowing, but we want to see the AI revolution accelerating responsibly," the letter says.


British authors want Meta to answer for alleged copyright infringement

Engadget

A March 20 article in The Atlantic served as the letter's impetus. It reported that Meta had used LibGen, a pirated collection of over 7.5 million books, to train its AI models. Anyone on the internet over the last few weeks has likely seen videos of distraught authors learning that their work is available on the database (and potentially used by Meta without their permission). A lawsuit in the US alleges Meta CEO Mark Zuckerberg approved the use of LibGen's data to train its AI. The lawsuit's plaintiffs include writers Sarah Silverman and Ta-Nehisi Coates.


'The Stakes Are Incredibly High.' Two Former OpenAI Employees On the Need for Whistleblower Protections

TIME - Tech

This could be a costly interview for William Saunders. The former safety researcher resigned from OpenAI in February, and--like many other departing employees--signed a non-disparagement agreement in order to keep the right to sell his equity in the company. Although he says OpenAI has since told him that it does not intend to enforce the agreement, and has made similar public commitments, he is still taking a risk by speaking out. "By speaking to you I might never be able to access vested equity worth millions of dollars," he tells TIME. "But I think it's more important to have a public dialogue about what is happening at these AGI companies."


OpenAI and Google DeepMind workers warn of AI industry risks in open letter

The Guardian

A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers. The letter, which calls for a "right to warn about artificial intelligence", is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic. "AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the letter states. "However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."


'The outcome could be extinction': Elon Musk-backed researcher warns there is NO proof AI can be controlled - and says tech should be shelved NOW

Daily Mail - Science & tech

A researcher backed by Elon Musk is re-sounding the alarm about AI's threat to humanity after finding no proof the tech can be controlled. Dr Roman V Yampolskiy, an AI safety expert, has received funding from the billionaire to study advanced intelligent systems that are the focus of his upcoming book 'AI: Unexplainable, Unpredictable, Uncontrollable.' The book examines how AI has the potential to dramatically reshape society, not always to our advantage, and has the 'potential to cause an existential catastrophe.' Yampolskiy, who is a professor at the University of Louisville, conducted an 'examination of the scientific literature on AI' and concluded there is no proof that the tech could be stopped from going rogue. To fully control AI, he suggested that it needs to be modifiable with 'undo' options, limitable, transparent, and easy to understand in human language.


Sam Altman appears to admit the existence of a secret new doomsday AI system he helped build - that could be the leap to artificial general intelligence

Daily Mail - Science & tech

Sam Altman has appeared to lend credence to the theory that he was fired from OpenAI over his company's super powerful, secret new AI system he helped build. Multiple employees reportedly warned the company's board of directors that this project, named Q* (pronounced 'Q star'), was becoming so advanced it could already pass math exams and perform critical thinking tasks. And they felt Altman was not taking their warnings seriously. In an interview this week, Altman did not deny the existence of the secret program that some employees said was responsible for his firing. Instead, he called the revelation of Q* an 'unfortunate leak.' Altman was fired, then hired by OpenAI investor Microsoft, and then re-hired by OpenAI - which also gave the boot to most of the board that cut Altman loose - all over the course of just five days in November.


OpenAI staff threaten mass walkout unless Sam Altman is reinstated

The Guardian

Hundreds of OpenAI staff members have threatened to quit en masse if the board overseeing the ChatGPT developer does not reinstate its ousted chief executive Sam Altman and then step down. In an open letter, 550 of OpenAI's 700 employees demanded the resignation of the board and said they might walk out if Altman is not brought back. Altman was fired on Friday in a move that shocked Silicon Valley and angered most of the company's employees. The letter to OpenAI's four remaining board directors says: "Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgment and care for our mission and employees."


AI Experts Call For Policy Action to Avoid Extreme Risks

TIME - Tech

On Tuesday, 24 AI experts, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, released a paper calling on governments to take action to manage risks from AI. The policy document had a particular focus on extreme risks posed by the most advanced systems, such as enabling large-scale criminal or terrorist activities. The paper makes a number of concrete policy recommendations, such as ensuring that major tech companies and public funders devote at least one-third of their AI R&D budget to projects that promote safe and ethical use of AI. The authors also call for the creation of national and international standards. Bengio, scientific director at the Montreal Institute for Learning Algorithms, says that the paper aims to help policymakers, the media, and the general public "understand the risks, and some of the things we have to do to make [AI] systems do what we want."


Geoffrey Hinton, dubbed the 'Godfather of AI,' warns technology will be smarter than humans in five years

Daily Mail - Science & tech

The 'Godfather of AI' has warned the tech will be smarter than humans in some ways by the end of the decade - and he believes it will ultimately destroy humanity. In a doom-laden interview with 60 Minutes, Geoffrey Hinton, 75, predicted that within five years the systems will surpass human intelligence, leading to the rise of 'killer robots,' fake news and a boom in unemployment. Hinton is a former Google executive credited with creating the technology that became the bedrock of systems like ChatGPT and Google Bard. He recently revealed his fears that the technology could go rogue and write its own code, allowing it to modify itself. While the scientist fears many aspects of the technology, he said AI has huge benefits in healthcare, such as designing drugs and recognizing medical issues.